
    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable visual navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method is the representation of the entire 3D formation as a convex hull projected along the desired path that the group has to follow. This approach yields collision-free solutions and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by the simulations and hardware experiments presented in the paper.
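    A minimal illustration of the hull-based idea described above, assuming point obstacles and a sampled candidate path; the function names, the clearance margin, and the collision model are assumptions of this sketch, not the method of the paper:

```python
# Sketch: represent the 3D formation as the convex hull of the robots'
# positions in the leader frame, sweep that hull along a candidate path, and
# check it against point obstacles. Illustrative only.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def formation_hull(relative_positions):
    """Convex hull of robot positions given in the leader frame (N x 3)."""
    return ConvexHull(np.asarray(relative_positions, dtype=float))

def swept_hull_is_collision_free(hull, path_points, obstacles, margin=0.5):
    """Translate the hull to every path sample and test it against point obstacles.

    An obstacle that lies inside a translated hull, or closer than `margin`
    to any of its vertices, invalidates the candidate path.
    """
    base = hull.points[hull.vertices]            # hull vertices, leader frame
    obstacles = np.asarray(obstacles, dtype=float)
    for p in np.asarray(path_points, dtype=float):
        tri = Delaunay(base + p)                 # translated hull at this sample
        for obs in obstacles:
            inside = tri.find_simplex(obs) >= 0
            too_close = np.min(np.linalg.norm(base + p - obs, axis=1)) < margin
            if inside or too_close:
                return False
    return True
```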

    System for deployment of groups of unmanned micro aerial vehicles in GPS-denied environments using onboard visual relative localization

    A complete system for the control of swarms of micro aerial vehicles (MAVs), also referred to in the literature as unmanned aerial vehicles (UAVs) or unmanned aerial systems (UASs), stabilized via onboard visual relative localization is described in this paper. The main purpose of this work is to verify the feasibility of self-stabilization of multi-MAV groups without an external global positioning system. This approach enables the deployment of MAV swarms outside laboratory conditions, and it may be considered an enabling technique for utilizing fleets of MAVs in real-world scenarios. The proposed vision-based stabilization approach is designed for several different multi-UAV robotic applications (leader-follower UAV formation stabilization, UAV swarm stabilization and deployment in surveillance scenarios, and cooperative UAV sensory measurement). Deployment of the system in real-world scenarios faithfully exposes its operational constraints, which stem from the limited onboard sensing suites and processing capabilities. The performance of the presented approach (MAV control, motion planning, MAV stabilization, and trajectory planning) in multi-MAV applications has been validated by experimental results in indoor as well as in challenging outdoor environments (e.g., in windy conditions and in a former pit mine).
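    As a rough illustration of stabilization on top of visual relative localization, the sketch below shows a follower holding a fixed offset from its leader using a simple proportional rule; the controller, gain, and velocity limit are assumptions of this sketch, not the control scheme used in the paper:

```python
# Minimal sketch, not the authors' system: a follower MAV holds a desired
# offset from its leader using only the relative position reported by an
# onboard visual relative-localization module.
import numpy as np

def follower_velocity_command(relative_leader_position, desired_offset,
                              gain=0.8, v_max=1.0):
    """Velocity command (m/s) driving the measured offset toward the desired one.

    relative_leader_position: leader position in the follower's body frame (3,).
    desired_offset: where the leader should appear in that frame (3,).
    """
    error = np.asarray(relative_leader_position) - np.asarray(desired_offset)
    v = gain * error                      # proportional term on the offset error
    speed = np.linalg.norm(v)
    if speed > v_max:                     # saturate to respect MAV limits
        v *= v_max / speed
    return v
```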

    Safeguarding pollinators and their values to human well-being

    Wild and managed pollinators provide a wide range of benefits to society in terms of contributions to food security, farmer and beekeeper livelihoods, social and cultural values, as well as the maintenance of wider biodiversity and ecosystem stability. Pollinators face numerous threats, including changes in land use and management intensity, climate change, pesticides and genetically modified crops, pollinator management and pathogens, and invasive alien species. There are well-documented declines in some wild and managed pollinators in several regions of the world. However, many effective policy and management responses can be implemented to safeguard pollinators and sustain pollination services.

    Informed sampling for asymptotically optimal path planning

    Anytime almost-surely asymptotically optimal planners, such as RRT∗, incrementally find paths to every state in the search domain. This is inefficient once an initial solution is found, as then only states that can provide a better solution need to be considered. Exact knowledge of these states requires solving the problem but can be approximated with heuristics. This paper formally defines these sets of states and demonstrates how they can be used to analyze arbitrary planning problems. It uses the well-known L^2 norm (i.e., Euclidean distance) to analyze minimum-path-length problems and shows that existing approaches decrease in effectiveness factorially (i.e., faster than exponentially) with state dimension. It presents a method to address this curse of dimensionality by directly sampling the prolate hyperspheroids (i.e., symmetric n-dimensional ellipses) that define the L^2 informed set. The importance of this direct informed sampling technique is demonstrated with Informed RRT∗. This extension of RRT∗ has less theoretical dependence on state dimension and problem size than existing techniques and allows for linear convergence on some problems. It is shown experimentally to find better solutions faster than existing techniques on both abstract planning problems and HERB, a two-arm manipulation robot.
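    The direct informed sampling step mentioned above can be sketched following the standard construction from the Informed RRT∗ literature: sample the unit n-ball uniformly, stretch it to the prolate hyperspheroid whose focal points are the start and goal and whose transverse diameter equals the current best solution cost, then rotate and translate it into the world frame. Variable names are illustrative; this is a re-implementation sketch, not the authors' code:

```python
# Sketch of uniform sampling from the L^2 informed set (a prolate hyperspheroid).
import numpy as np

def rotation_to_world(x_start, x_goal):
    """Rotation aligning the first axis with the start-goal direction (via SVD)."""
    n = x_start.size
    a1 = (x_goal - x_start) / np.linalg.norm(x_goal - x_start)
    U, _, Vt = np.linalg.svd(np.outer(a1, np.eye(n)[0]))
    diag = np.ones(n)
    diag[-1] = np.linalg.det(U) * np.linalg.det(Vt.T)   # keep det(C) = +1
    return U @ np.diag(diag) @ Vt

def sample_informed(x_start, x_goal, c_best, rng=None):
    """Uniform sample from the informed set; assumes c_best >= ||x_goal - x_start||."""
    if rng is None:
        rng = np.random.default_rng()
    c_min = np.linalg.norm(x_goal - x_start)
    centre = (x_start + x_goal) / 2.0
    # Radii: c_best/2 along the transverse axis, sqrt(c_best^2 - c_min^2)/2 elsewhere.
    r = np.full(x_start.size, np.sqrt(c_best**2 - c_min**2) / 2.0)
    r[0] = c_best / 2.0
    # Uniform sample from the unit n-ball, then stretch, rotate and translate.
    x_ball = rng.normal(size=x_start.size)
    x_ball *= rng.uniform() ** (1.0 / x_start.size) / np.linalg.norm(x_ball)
    return rotation_to_world(x_start, x_goal) @ (r * x_ball) + centre
```

    In an RRT∗ loop this sampler would simply replace uniform sampling of the whole space once a first solution with cost c_best is known, shrinking the sampled region as better solutions are found.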

    Exploiting photometric information for planning under uncertainty

    Vision-based localization systems rely on highly textured areas to achieve accurate pose estimation. However, most previous path planning strategies select trajectories with minimum pose uncertainty by leveraging only the geometric structure of the scene, neglecting the photometric information (i.e., texture). Our planner exploits the scene's visual appearance (i.e., the photometric information) in combination with its 3D geometry. Furthermore, we assume that no prior knowledge about the environment is given, meaning that no pre-computed map or 3D geometry is available. We introduce a novel approach to update the optimal plan on the fly as new visual information is gathered. We demonstrate our approach with real and simulated Micro Aerial Vehicles (MAVs) that perform perception-aware path planning in real time during exploration. We show significantly reduced pose uncertainty over trajectories planned without considering the perception of the robot.
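    A toy sketch of folding photometric information into trajectory selection: score each candidate by the image texture its predicted views would observe, here measured by mean squared image-gradient magnitude. The metric and the selection rule are assumptions for illustration, not the criterion used in the paper:

```python
# Illustrative only: rank candidate trajectories by the texture visible along them,
# as a crude proxy for how well a vision-based localizer would be constrained.
import numpy as np

def texture_score(image):
    """Mean squared gradient magnitude of a grayscale image (H x W array)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx**2 + gy**2))

def pick_trajectory(candidate_trajectories, predicted_views):
    """Choose the candidate whose predicted views contain the most texture.

    candidate_trajectories: list of trajectory identifiers.
    predicted_views: dict mapping identifier -> list of predicted grayscale images.
    """
    def score(traj_id):
        views = predicted_views[traj_id]
        return sum(texture_score(v) for v in views) / max(len(views), 1)
    return max(candidate_trajectories, key=score)
```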